Deep Random based Key Exchange protocol resisting unlimited MITM
We present a protocol enabling two legitimate partners sharing an initial
secret to mutually authenticate and to exchange an encryption session key. The
opponent is an active Man In The Middle (MITM) with unlimited computation and
storage capacities. The resistance to unlimited MITM is obtained through the
combined use of Deep Random secrecy, previously introduced and proved
unconditionally secure against a passive opponent for key exchange, and universal
hashing techniques. We prove the resistance to MITM interception attacks, and
show that (i) upon successful completion, the protocol leaks no residual
information about the current value of the shared secret to the opponent, and
(ii) any unsuccessful completion is detectable by the legitimate partners.
We also discuss implementation techniques. Comment: 14 pages. V2: Updated reminder in the formalism of Deep Random
assumption. arXiv admin note: text overlap with arXiv:1611.01683,
arXiv:1507.0825
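As a hedged illustration of the universal-hashing ingredient only (not the paper's construction: the field size, key derivation, and message encoding below are assumptions), a Carter-Wegman-style one-time authenticator built from a polynomial-evaluation universal hash family can be sketched in Python:

    # Polynomial-evaluation universal hash over a prime field, used as a
    # one-time authenticator. Illustrative only; NOT the protocol of the paper.
    import secrets

    P = (1 << 127) - 1  # Mersenne prime used as the field modulus (assumption)

    def poly_hash(key: int, blocks: list[int]) -> int:
        """Evaluate the message polynomial at the secret point `key`, mod P."""
        acc = 0
        for b in blocks:
            acc = (acc * key + b) % P
        return acc

    def tag(msg: bytes, hash_key: int, otp: int) -> int:
        """Universal hash of the message, masked by a one-time pad value."""
        blocks = [int.from_bytes(msg[i:i + 8], "big") for i in range(0, len(msg), 8)]
        return (poly_hash(hash_key, blocks) + otp) % P

    # The legitimate partners would derive (hash_key, otp) from their shared
    # secret; a MITM who does not know them forges a valid tag for a modified
    # message with probability only about (len(msg)/8)/P.
    hash_key, otp = secrets.randbelow(P), secrets.randbelow(P)
    print(tag(b"key-exchange transcript", hash_key, otp))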
An Introduction to Superconducting Qubits and Circuit Quantum Electrodynamics
A subset of the concepts of circuit quantum electrodynamics is reviewed as a
reference for the Axion Dark Matter Experiment (ADMX) community as part of the
proceedings of the 2nd Workshop on Microwave Cavities and Detectors for Axion
Research. The classical Lagrangians and Hamiltonians for an LC circuit are
discussed along with black box circuit quantization methods for a weakly
anharmonic qubit coupled to a resonator or cavity.
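For reference, the standard LC-oscillator expressions the abstract alludes to, written with the flux $\Phi$ as the coordinate (a common convention, stated here as background rather than quoted from the proceedings), are
$$\mathcal{L} = \tfrac{1}{2} C \dot{\Phi}^2 - \frac{\Phi^2}{2L}, \qquad Q = \frac{\partial \mathcal{L}}{\partial \dot{\Phi}} = C\dot{\Phi}, \qquad H = \frac{Q^2}{2C} + \frac{\Phi^2}{2L}, \qquad \omega = \frac{1}{\sqrt{LC}}.$$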
An entropy-based class assignment detection approach for RDF data
RDF-style knowledge bases usually contain a certain level of noise, known as Semantic Web data quality issues. This paper introduces a new Semantic Web data quality issue, the Incorrect Class Assignment problem, which captures incorrect assignments between instances at the instance level and the corresponding classes in an ontology. We propose an approach called CAD (Class Assignment Detector) that determines whether the relationships between instances and classes are correct or incorrect by analyzing features of the classes in an ontology. Initial experiments conducted on a dataset demonstrate the effectiveness of CAD.
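The abstract does not spell out the CAD algorithm, but one toy way an entropy-based signal over class features could flag suspect assignments (purely illustrative; all names and scoring choices below are assumptions) is sketched here in Python:

    # Toy entropy-based outlier score for instance-class assignments.
    # Purely illustrative; NOT the CAD algorithm itself.
    from collections import Counter
    from math import log2

    def predicate_distribution(instances: list[set[str]]) -> dict[str, float]:
        """Relative frequency of predicates over the known members of a class."""
        counts = Counter(p for preds in instances for p in preds)
        total = sum(counts.values())
        return {p: c / total for p, c in counts.items()}

    def entropy(dist: dict[str, float]) -> float:
        return -sum(q * log2(q) for q in dist.values() if q > 0)

    def surprise(instance_preds: set[str], class_dist: dict[str, float]) -> float:
        """Average surprisal of an instance's predicates under the class profile;
        unseen predicates get a fixed small probability (assumption)."""
        return sum(-log2(class_dist.get(p, 1e-6)) for p in instance_preds) / max(len(instance_preds), 1)

    # Instances whose surprise greatly exceeds the class entropy are candidates
    # for an incorrect class assignment.
    members = [{"dbo:birthPlace", "dbo:team"}, {"dbo:birthPlace", "dbo:position"}]
    dist = predicate_distribution(members)
    print(entropy(dist), surprise({"dbo:populationTotal"}, dist))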
Weak Chaos from Tsallis Entropy
We present a geometric, model-independent argument that aims to explain why
the Tsallis entropy describes systems exhibiting "weak chaos", namely systems
whose underlying dynamics has a vanishing largest Lyapunov exponent. Our argument
relies on properties of a deformation map of the reals induced by the Tsallis
entropy, and its conclusion agrees with all currently known results. Comment: 19 pages, Standard LaTeX2e, v2: addition of the last paragraph in
Section 4. Three additional refs. To be published in QScience Connect.
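For orientation, the Tsallis entropy and the induced q-deformation of the ordinary logarithm and exponential (standard definitions, recovering the Boltzmann-Gibbs case as $q \to 1$) are
$$S_q = \frac{1 - \sum_i p_i^{\,q}}{q-1} \;\xrightarrow{q \to 1}\; -\sum_i p_i \ln p_i, \qquad \ln_q x = \frac{x^{1-q}-1}{1-q}, \qquad e_q^{x} = \bigl[1 + (1-q)\,x\bigr]_+^{1/(1-q)}.$$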
Fundamental Aspects of the ISM Fractality
The ubiquitous clumpy state of the ISM raises a fundamental and open problem
of physics, which is the correct statistical treatment of systems dominated by
long range interactions. A simple solvable hierarchical model is presented
which explains why systems dominated by gravity prefer to adopt a fractal
dimension around 2 or less, like the cold ISM and large scale structures. This
has a direct relation with the general transparency, or blackness, of the
Universe. Comment: 6 pages, LaTeX2e, crckapb macro, no figure, uuencoded compressed tar
file. To be published in the proceedings of the "Dust-Morphology"
conference, Johannesburg, 22-26 January, 1996, D. Block (ed.), (Kluwer,
Dordrecht).
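A back-of-the-envelope version of the transparency connection (stated here as standard reasoning, not quoted from the paper): if matter is distributed with mass-radius fractal dimension $D$, so that $N(<R) \propto R^{D}$ and the mean density inside radius $R$ scales as $\bar{n}(R) \propto R^{D-3}$, then the optical depth out to distance $R$ behaves as
$$\tau(R) \propto \int^{R} \bar{n}(r)\, dr \propto R^{D-2},$$
which stays bounded as $R \to \infty$ for $D < 2$ and diverges for $D > 2$; $D \approx 2$ is thus the critical dimension separating a transparent sky from a fully covered ("black") one.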
A categorical foundation for Bayesian probability
Given two measurable spaces with countably generated $\sigma$-algebras, a
perfect prior probability measure on the first space, and a sampling
distribution from it to the second, there is a corresponding inference map,
unique up to a set of measure zero. Thus, given a data measurement, a
posterior probability can be computed. This procedure is iterative: with
each updated probability, we obtain a new joint distribution, which in
turn yields a new inference map, and the process repeats with each
additional measurement. The main result uses an existence theorem for regular
conditional probabilities by Faden, which holds in more generality than the
setting of Polish spaces. This less stringent setting then allows for
non-trivial decision rules (Eilenberg--Moore algebras) on finite (as well as
non-finite) spaces, and also provides a common framework for decision
theory and Bayesian probability. Comment: 15 pages; revised setting to more clearly explain how to incorporate
perfect measures and the Giry monad; to appear in Applied Categorical
Structures.
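In generic notation (the abstract's own symbols were lost in extraction, so $X$, $Y$, $P$, $S$, $I$ below are placeholders), the defining condition of the inference map is the usual disintegration property: the prior $P$ on $X$ and the sampling kernel $S(\cdot \mid x)$ on $Y$ determine the joint measure
$$J(A \times B) = \int_A S(B \mid x)\, dP(x),$$
and the inference map is any kernel $I(\cdot \mid y)$ satisfying
$$J(A \times B) = \int_B I(A \mid y)\, dP_Y(y), \qquad P_Y(B) = J(X \times B),$$
unique up to a $P_Y$-null set; Faden's theorem supplies its existence under the perfectness and countable-generation hypotheses above.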
Maximum Causal Entropy Specification Inference from Demonstrations
In many settings (e.g., robotics) demonstrations provide a natural way to
specify tasks; however, most methods for learning from demonstrations either do
not provide guarantees that the artifacts learned for the tasks, such as
rewards or policies, can be safely composed and/or do not explicitly capture
history dependencies. Motivated by this deficit, recent works have proposed
learning Boolean task specifications, a class of Boolean non-Markovian rewards
which admit well-defined composition and explicitly handle historical
dependencies. This work continues this line of research by adapting maximum
causal entropy inverse reinforcement learning to estimate the posterior
probability of a specification given a multi-set of demonstrations. The key
algorithmic insight is to leverage the extensive literature and tooling on
reduced ordered binary decision diagrams to efficiently encode a time-unrolled
Markov Decision Process. This enables transforming a naive exponential-time
algorithm into a polynomial-time algorithm. Comment: Computer Aided Verification, 202
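As a minimal sketch of the unrolling idea (using the dd package for ROBDDs, which is an assumption about tooling; this is a generic illustration, not the paper's implementation), a Boolean, history-dependent specification can be encoded over a fixed horizon as follows:

    # Unroll a Boolean, non-Markovian specification over a fixed horizon into a
    # reduced ordered BDD with the `dd` package (pip install dd). Illustrative
    # only; not the paper's implementation.
    from dd.autoref import BDD

    HORIZON = 4
    bdd = BDD()
    bdd.declare(*[f"fail_{t}" for t in range(HORIZON)],
                *[f"goal_{t}" for t in range(HORIZON)])

    # Specification: never hit `fail`, and reach `goal` at least once -- the
    # historical dependency is expressed directly over per-step variables.
    never_fail = bdd.add_expr(" & ".join(f"~fail_{t}" for t in range(HORIZON)))
    eventually_goal = bdd.add_expr(" | ".join(f"goal_{t}" for t in range(HORIZON)))
    spec = never_fail & eventually_goal

    # Count length-HORIZON observation traces satisfying the specification;
    # BDD-based counts/weights of this kind are the quantities that feed a
    # maximum causal entropy likelihood of the demonstrations.
    print(bdd.count(spec, nvars=2 * HORIZON))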
Bayesian astrostatistics: a backward look to the future
This perspective chapter briefly surveys: (1) past growth in the use of
Bayesian methods in astrophysics; (2) current misconceptions about both
frequentist and Bayesian statistical inference that hinder wider adoption of
Bayesian methods by astronomers; and (3) multilevel (hierarchical) Bayesian
modeling as a major future direction for research in Bayesian astrostatistics,
exemplified in part by presentations at the first ISI invited session on
astrostatistics, commemorated in this volume. It closes with an intentionally
provocative recommendation for astronomical survey data reporting, motivated by
the multilevel Bayesian perspective on modeling cosmic populations: that
astronomers cease producing catalogs of estimated fluxes and other source
properties from surveys. Instead, summaries of likelihood functions (or
marginal likelihood functions) for source properties should be reported (not
posterior probability density functions), including nontrivial summaries (not
simply upper limits) for candidate objects that do not pass traditional
detection thresholds. Comment: 27 pp, 4 figures. A lightly revised version of a chapter in
"Astrostatistical Challenges for the New Astronomy" (Joseph M. Hilbe, ed.,
Springer, New York, forthcoming in 2012), the inaugural volume for the
Springer Series in Astrostatistics. Version 2 has minor clarifications and an
additional reference.
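As a toy illustration of the reporting recommendation (the model, constants, and array shapes below are assumptions, not taken from the chapter): publish a summary of each source's likelihood function rather than a point estimate, so it can later be folded into a hierarchical population analysis.

    # Report a per-source flux likelihood summary instead of a catalog point
    # estimate. Toy Poisson counts model; all constants are assumptions.
    import numpy as np
    from scipy.stats import poisson

    exposure, background = 100.0, 5.0   # instrument constants (assumed)
    observed_counts = 12                # counts in the source aperture

    flux_grid = np.linspace(0.0, 0.3, 301)
    log_like = poisson.logpmf(observed_counts, background + exposure * flux_grid)

    # The "catalog row" is the likelihood summary itself -- not a posterior and
    # not an estimate -- so downstream users can supply their own population prior.
    catalog_row = {"flux_grid": flux_grid, "log_likelihood": log_like}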
Which Distributions (or Families of Distributions) Best Represent Interval Uncertainty: Case of Permutation-Invariant Criteria
In many practical situations, we only know the interval containing the quantity of interest and have no information about the probability of different values within this interval. In contrast to the cases when we know the distributions and can thus use Monte Carlo simulations, processing such interval uncertainty is difficult -- crudely speaking, because we need to try all possible distributions on this interval. Sometimes, the problem can be simplified: namely, it is possible to select a single distribution (or a small family of distributions) whose analysis provides a good understanding of the situation. The best-known case is when we use the Maximum Entropy approach and get the uniform distribution on the interval. Interestingly, sensitivity analysis -- which has completely different objectives -- leads to the selection of the same uniform distribution. In this paper, we provide a general explanation of why the uniform distribution appears in different situations -- namely, it appears every time we have a permutation-invariant objective function with a unique optimum. We also discuss what happens if there are several optima.
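The variational argument behind the Maximum Entropy answer is one line: maximizing the differential entropy $-\int_a^b p(x)\ln p(x)\,dx$ subject to $\int_a^b p(x)\,dx = 1$ with a Lagrange multiplier $\lambda$ gives
$$-\ln p(x) - 1 + \lambda = 0,$$
so $p(x)$ is constant on $[a,b]$, i.e. $p(x) = 1/(b-a)$: the uniform distribution.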
The Blind Watchmaker Network: Scale-freeness and Evolution
It is suggested that the degree distribution for networks of the
cell-metabolism for simple organisms reflects a ubiquitous randomness. This
implies that natural selection has exerted no or very little pressure on the
network degree distribution during evolution. The corresponding random network,
here termed the blind watchmaker network, has a power-law degree distribution
with an exponent gamma >= 2. It is random with respect to a complete set of
network states characterized by a description of which links are attached to a
node as well as a time-ordering of these links. No a priori assumption of any
growth mechanism or evolution process is made. It is found that the degree
distribution of the blind watchmaker network agrees very precisely with that of
the metabolic networks. This implies that the evolutionary pathway of the
cell-metabolism, when projected onto a metabolic network representation, has
remained statistically random with respect to a complete set of network states.
This suggests that even a biological system, which due to natural selection has
developed an enormous specificity like the cellular metabolism, nevertheless
can, at the same time, display well defined characteristics emanating from the
ubiquitous inherent random element of Darwinian evolution. The fact that
completely random networks may also have scale-free node distributions gives a new
perspective on the origin of scale-free networks in general. Comment: 5 pages, 3 figures
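For readers who want to reproduce the exponent comparison on their own data, a generic (continuous-approximation) maximum-likelihood estimate of the degree exponent gamma from a degree sequence, in the style of Clauset, Shalizi and Newman, is sketched below; this is a standard tool, not the paper's specific analysis, and the example degree sequence is made up.

    # Generic MLE for a power-law degree exponent gamma (continuous
    # approximation). Standard tool; the toy degree sequence is an assumption.
    from math import log

    def estimate_gamma(degrees: list[int], k_min: int = 2) -> float:
        tail = [k for k in degrees if k >= k_min]
        return 1.0 + len(tail) / sum(log(k / (k_min - 0.5)) for k in tail)

    degrees = [1, 2, 2, 3, 3, 3, 4, 5, 7, 11, 19]
    print(estimate_gamma(degrees))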